Ollie Liu


I’m a second-year Ph.D. student in Computer Science at the University of Southern California, fortunate to be co-advised by Prof. Dani Yogatama and Prof. Willie Neiswanger. In life, my friends call me Oliver 🫒

I’m passionate about developing multimodal foundation models that accelerate scientific discovery. My current research spans foundation model pre-training, understanding their capabilities and limitations, and continual training to align them with human desiderata.

Before USC, I was a researcher in continuous optimization with Prof. Jorge Nocedal at Northwestern University. Even before that, I did my B.S. and M.S. at Carnegie Mellon University, majoring in machine learning.

I love spending time with my border collie pup Doodle! He lives with my family in the beautiful State of Washington.

news

Apr 18, 2024 I gave a talk on DeLLMa at the Information Sciences Institute NLG Seminar ✌️
Apr 1, 2024 We introduce IsoBench🔥, an evaluation suite that benchmarks multimodal foundation models on isomorphic representations!
Mar 13, 2024 Our work, On Retrieval Augmentation and the Limitations of Language Model Training, has been accepted to NAACL 2024 🇲🇽
Feb 6, 2024 New preprint available! We introduce DeLLMa🤔, a large language model-based framework for making rational decisions under uncertainty.
Jan 16, 2024 Our paper, Interpretable Diffusion via Information Decomposition, has been accepted for poster presentation at ICLR 2024! First time traveling to Vienna ✈️🇦🇹

selected publications

  1. IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations
    Deqing Fu*, Ghazal Khalighinejad*, Ollie Liu*, and 4 more authors
    2024
  2. DeLLMa: A Framework for Decision Making Under Uncertainty with Large Language Models
    Ollie Liu*, Deqing Fu*, Dani Yogatama, and 1 more author
    arXiv preprint arXiv:2402.02392, 2024
  3. Interpretable Diffusion via Information Decomposition
    Xianghao Kong*, Ollie Liu*, Han Li, and 2 more authors
    In The Twelfth International Conference on Learning Representations, 2024
  4. How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model
    Michael Hanna, Ollie Liu, and Alexandre Variengien
    In Thirty-seventh Conference on Neural Information Processing Systems, 2023